PhD Students
Hubert Kompanowski
Project Title
Supervision Team
Description
In the modern era of deep learning, recent developments in generative modeling have shown great promise in synthesizing photorealistic images and high-fidelity 3D models, with high-level and fine-grained control induced by text prompts via learning from large datasets. This research project aims to investigate 3D model generation from a cross-modality perspective, developing new techniques for realistic image and 3D synthesis that can serve as the building blocks for the next generation of 3D modeling and rendering tools. In particular, we will target high-quality 3D model synthesis at the object and scene levels, investigating how generative adversarial networks (GANs) and diffusion models can be applied to generate high-fidelity, realistic objects and scenes. As proof-of-concept applications, we will apply the developed techniques to the rapid modeling of 3D scenes for AR/VR applications.